Rayshade
is a ray tracing program capable of rendering images composed
of a large number of primitive objects.
Rayshade
reads a series of lines supplied on the standard input or contained
in the file named on the command line.
After reading the input file,
rayshade
renders the image. As each scanline is rendered, pixels are written to
a Utah Raster RLE format image file. By default, this image file is written
to the standard output, and
information messages and statistics
are written to the standard error.
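For example, a scene description stored in a file might be rendered with a command line such as the following (the file names are illustrative):

    rayshade scene.ray > picture.rle 2> log

Here the image is written to picture.rle, and the informational messages and statistics are collected in log.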
The input file consists of commands (denoted by keywords) followed by numerical or character arguments. Spaces, tabs, or newlines may be used to separate items in the file. Coordinates and vectors are specified in arbitrary floating-point units, and may be written with or without a decimal point. Colors are specified as red-green-blue floating-point triplets which indicate intensity and range from 0 (zero intensity) to 1 (full intensity).
The following sections describe the keywords which may be included
in the input file. Items in boldface type are literals, while
square brackets surround optional items.
Three types of light sources are supported: point, extended (area), and directional. Point sources are specified by a location in world space and produce shadows with sharp edges. Extended sources are specified by a location and a radius. They produce shadows with "fuzzy" edges (penumbrae), but increase ray tracing time considerably. Directional sources are specified by a direction. A maximum of 10 light sources may be defined.
In the definitions below, brightness specifies the intensity of the light source. If a single floating-point number is given, the light source emits a "white" light of the indicated normalized intensity. If three floating-point numbers are given, they are interpreted as the normalized red, green and blue components of the light source's color.
Lights are defined as follows:
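The directional form below appears verbatim in the examples at the end of this manual; the point and extended forms are a sketch based on the descriptions above and should be checked against the full documentation:

    light brightness point x y z
    light brightness extended radius x y z
    light brightness directional x y z

In each case brightness is either a single intensity value or a red-green-blue triple, as described above.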
Every primitive object has a surface associated with it. The surface specifies the color, reflectivity, and transparency of an object. A surface may be defined anywhere in the input file, provided it is defined before it is used. Surfaces are defined once, and may be associated with any number of primitive objects. A surface definition is given by:
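What follows is a reconstruction from the parameter descriptions below and from the surface lines in the examples at the end of this manual (a sketch rather than a verbatim grammar):

    surface surf_name ar ag ab dr dg db sr sg sb coef refl transp index [translu stcoef]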
Surf_name is the name associated with the surface. This name must be unique for each surface.
Ar, ag and ab are used to specify the red, green, and blue components of the surface's ambient color. This color is always applied to a ray striking the surface.
Dr, dg and db specify the diffuse color of the surface. This color, the brightness component of each light source whose light strikes the surface, and the dot product of the incident ray and the surface normal at the point of intersection determine the color which is added to the color of the incident ray.
Sr, sg and sb are used to specify the specular color of the surface. The application of this color is controlled by the coef parameter, a floating-point number which indicates the power to which the dot product of the surface's normal vector at the point of intersection and the vector to each light source is raised. The resulting value is used to scale the specular color of the surface, which is then added to the color of the ray striking the surface. This model (Phong lighting) simulates specular reflections of light sources on the surface of the object. The larger coef is, the smaller the highlights will be.
Refl is a floating-point number between 0 and 1 which indicates the reflectivity of the object. If non-zero, a ray striking the surface will spawn a reflection ray. The color assigned to that ray will be scaled by refl and added to the color of the incident ray.
Transp is a floating-point number between 0 and 1 which indicates the transparency of the object. If non-zero, a ray striking the surface will spawn a ray which is transmitted through the object. The resulting color of this transmitted ray is scaled by transp and added to the color of the incident ray. The direction of the transmitted ray is controlled by the index parameter, which indicates the index of refraction of the surface.
The optional parameters translu and stcoef may be used
to give a surface a translucent appearance. Translu is the
translucency of the surface. If non-zero and a light source
illuminates the side of the surface opposite that being rendered,
diffuse lighting calculations are performed with respect to
the side of the surface facing the light,
and
the result is
scaled by translu and added to the color of the incident ray.
Thus, translu accounts for diffuse transmission of light
through the primitive.
Stcoef is similar to coef, but it applies to the specular
transmission of highlights. Note that in both cases the index
of refraction of the surface is ignored. By default, surfaces
have zero translucency.
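As an illustration, the following hypothetical surface appends the two optional translucency parameters, assuming they follow the index of refraction in the order listed above:

    surface frosted 0.05 0.05 0.05 0.3 0.3 0.3 0.6 0.6 0.6 40. 0.1 0.5 1.1 0.4 25.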
The ray tracer is capable of rendering a number of primitive objects. Primitives may be specified inside of an object-definition block, in which case they are added to the list of primitives belonging to that object. In addition, primitives may be defined outside of object-definition blocks. Primitives such as these are added to the list of primitives belonging to the World object. See below for more details.
Rayshade usually ensures that a primitive's normal is pointing towards the origin of the incident ray when performing shading calculations. Exceptions to this rule are transparent primitives, for which rayshade uses the dot product of the normal and the incident ray to determine if the ray is entering or exiting the surface, and superquadrics, whose normals are never modified due to the nature of the ray/superquadric intersection code. Thus, all non-transparent primitives except superquadrics will in effect be double-sided.
Primitives are specified by lines of the form:
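The following lines, taken from the examples at the end of this manual, illustrate the general shape of a primitive specification: a keyword, a surface name, and the numbers defining the primitive's geometry (the comments are an interpretation and should be checked against the full documentation):

    sphere red 8. 0. 0. -2.                       /* radius, then center */
    plane green 0. 0. 1. 0. 0. -10.               /* an infinite plane */
    box truck_color 0. 0. 0. 5. 2. 2.             /* two opposite corners */
    cylinder axle_color 0. -2. 0. 0. 2. 0. 0.1    /* endpoints, then radius */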
One key feature of rayshade is its ability to treat groups of primitives as objects which may be transformed and instantiated at will. Objects are composed of groups of primitives and/or other objects and are specified in the input file as:
    define object_name [grid xvoxels yvoxels zvoxels] [list]
        [primitives]
        [instances]
    defend [texturing information]

The ordering of the various elements inside the object-definition block is inconsequential. Here, [instances] are any number of declarations of the form:
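A sketch of such a declaration, inferred from the instances in the examples at the end of this manual (it should be checked against the full documentation):

    object object_name [transformations]

where each transformation keyword (scale and translate both appear in the examples below) is followed by its numeric arguments.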
A special object named World is maintained internally by rayshade. Primitive definitions and object instantiations which do not appear inside an object-definition block are added to this object. When performing ray tracing, rays are intersected with the objects that make up the World object.
Internally, objects are stored by one of two means. By default, the constituents of an object are stored in a simple linked list. When a ray is intersected with such an object, the ray is tested for intersection with each item in the list. While the list is the default method of object storage, one may emphasize this fact in the input file by including the list keyword somewhere within the object-definition block.
The second form of internal object storage is the three-dimensional grid. A grid subdivides the space in which an object lies into an array of uniform box-shaped voxels; the grid's total size is calculated by rayshade and is equal to the bounding box of the object that is engridded. Each voxel contains a linked list of objects and primitives which lie within that voxel. When intersecting a ray with an object which is stored in a grid, the ray is traced incrementally from voxel to voxel, and the ray is tested for intersection against each object in the linked list of each voxel that is visited. In this way the intersection of a ray with a collection of objects is generally made faster at the expense of increased storage requirements.
This form of object representation is enabled by including the grid keyword somewhere within the object-definition block:
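For example (the object and surface names here are illustrative, and the voxel counts are arbitrary):

    define cluster
        grid 10 10 10          /* store this object's contents in a 10x10x10 grid */
        sphere red 1. 0. 0. 0.
        sphere red 1. 3. 0. 0.
    defend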
For convenience, one may also define surfaces inside of an object-definition block. Surfaces defined in this manner are nevertheless globally available.
In addition, object definitions may be nested. This facilitates the
definition of objects through the use of recursive programs.
Rayshade provides a means of applying solid procedural textures to surfaces of primitives. This is accomplished by supplying texture mapping information immediately following the definition of a primitive, object, or instance of an object. This allows one to texture individual primitives, objects, and individual instances of objects at will. Texturing information is supplied via a number of lines of the following form:
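Judging from the example at the end of this manual, a texturing line names the texture and is optionally followed by texture-specific arguments and transformations (a sketch, to be checked against the full documentation):

    texture texture_name [arguments] [transformations]

For instance, the second example below contains the line

    texture marble scale 0.5 0.5 0.5

which applies a marble texture scaled by 0.5 in each dimension.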
Versions of Perlin's Noise() and DNoise() functions are used to generate values for most of the interesting textures. There are eight available textures:
A colormap is an ASCII file 256 lines in length, each line containing three space-separated integers ranging from 0 to 255. The first number on the nth line specifies the red component of the nth entry in the colormap, the second number the green component, and the third the blue. The values in the colormap are normalized before being used in texturing functions. Textures which make use of colormaps generally compute an index into the colormap and use the corresponding entry to scale the ambient and diffuse components of a surface's color.
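For illustration, a hypothetical colormap that ramps linearly from black to pure blue (entry n holding 0 0 n) would begin:

    0 0 0
    0 0 1
    0 0 2
    0 0 3

and continue in the same fashion for the remaining 252 lines.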
It is important to note that more than one texture may be applied to
an object at any time. In addition to being able to apply more than
one texture directly (by supplying multiple "texturing information" lines for
a single object), one may instantiate textured objects which, in turn,
may be textured or contain instances of objects which are textured, and so on.
Rayshade has the capability of including several kinds of atmospheric effects when rendering an image. Currently, two such effects are available:
This section clarifies how antialiasing and sampling of extended light sources are accomplished. Two types of antialiasing are supported: adaptive subdivision and so-called "jittered sampling".
Adaptive subdivision works by sampling each pixel at its corners. The contrast between these four samples is computed, and if too large, the pixel is subdivided into four equivalent sub-pixels and the process is repeated. The threshold contrast may be controlled via the -C option or the contrast command. There are separate thresholds for the red, green, and blue channels. If the contrast in any of the three is greater than the appropriate threshold value, the pixel is subdivided. The pixel-subdivision process is repeated until either the samples' contrast is less than the threshold or the maximum pixel subdivision level, specified via the -P option or the adaptive command, is reached. When the subdivision process is complete, a weighted average of the samples is taken as the color of the pixel.
Jittered sampling works by dividing each pixel into a number of square regions and tracing a ray through some point in each region. The exact location in each region is chosen randomly. The number of regions into which a pixel is subdivided is specified through the use of the -S option. The integer following this option specifies the square root of the number of regions.
Each extended light source is, in effect, approximated by a square grid of light sources. The length of each side of the square is equal to the diameter of the extended source. Each array element, which is square in shape, is in turn sampled by randomly choosing a point within that element to which a ray is traced from the point of intersection. If the ray does not intersect any primitive object before it strikes a light source element, there is said to be no shadow cast by that portion of the light source. The fraction of the light emitted by an extended light source which reaches the point of intersection is the number of elements which are not blocked by intervening objects divided by the total number of elements. This fraction is used to scale the intensity (color) of the light source, and the scaled intensity is then used in the various lighting calculations.
When jittered sampling is used, one shadow ray is traced to each extended source per shading calculation. The element to be sampled is determined by the region of the pixel through which the eye ray at the top of the ray tree passed.
When adaptive supersampling is used, the -S option or the samples command controls how many shadow rays are traced to each extended light source per shading calculation. Specifically, each extended source is approximated by a square array consisting of samples * samples elements. However, the corners of the array are skipped to save rendering time and to more closely approximate the circular projection of an extended light source. Because the corners are skipped, samples must be at least 3 if adaptive supersampling is being used.
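For example, when adaptive supersampling is in use, a command line such as the following (the file names are illustrative) approximates each extended light source by a 3-by-3 array of elements with the corners skipped:

    rayshade -S 3 scene.ray > picture.rle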
Note that the meaning of the -S option (and the samples command) is different depending upon whether or not jittered sampling is being used.
While jittered sampling is generally slower than adaptive subdivision, it can be beneficial if the penumbrae cast by extended light sources take up a relatively large percentage of the entire image or if the image is especially prone to aliasing.
The following is a simple example of an input file:

    light 1.0 directional 1. 1. 1.
    surface red .2 0 0 .8 0 0 .5 .5 .5 32. 0.8 0. 1.
    surface green 0 .2 0 0 .8 0 0 0 0 0. 0. 0. 1.
    sphere red 8. 0. 0. -2.
    plane green 0. 0. 1. 0. 0. -10.
Passing this input to rayshade will result in an image of a red reflective sphere sitting on a green ground-plane being written to the standard output. Note that in this case, default values for eyep, lookp, up, screen, fov, and background are assumed.
A more interesting example uses instantiation to place multiple copies of an object at various locations in world space:
    eyep 10. 10. 10.
    fov 20
    light 1.0 directional 0. 1. 1.
    surface red .2 0 0 .8 0 0 .5 .5 .5 32. 0.8 0. 1.
    surface green 0 .2 0 0 .8 0 0 0 0 0. 0. 0. 1.
    surface white 0.1 0.1 0.1 0.8 0.8 0.8 0.6 0.6 0.6 30 0 0 0
    define blob
        sphere red 0.5 .5 .5 0.
        sphere white 0.5 .5 -.5 0. texture marble scale 0.5 0.5 0.5
        sphere red 0.5 -.5 -.5 0.
        sphere green 0.5 -.5 .5 0.
    defend
    object blob translate 1. 1. 0.
    object blob translate 1. -1. 0.
    object blob translate -1. -1. 0.
    object blob translate -1. 1. 0.
    grid 20 20 20
Here, an object named blob is defined to consist of four spheres, two of which are red and reflective. The object is stored as a simple list of the four spheres. The World object consists of four instances of this object, translated to place them in a regular pattern about the origin. Note that since the marbled sphere was textured in "sphere space" each instance of that particular sphere has exactly the same marble texture applied to it.
Of course, just as the object blob was instantiated as part of the World object, one may instantiate objects as part of any other object. For example, a series of objects such as:
    define wheel
        sphere tire_color 1. 0 0 0 scale 1. 0.2 1.
        sphere hub_color 0.2 0 0. 0
    defend
    define axle
        object wheel translate 0. 2. 0.
        object wheel translate 0. -2. 0.
        cylinder axle_color 0. -2. 0. 0. 2. 0. 0.1
    defend
    define truck
        box truck_color 0. 0. 0. 5. 2. 2.    /* Trailer */
        box truck_color 6. 0 -1 2 2 1        /* Cab */
        object axle translate -4 0 -2
        object axle translate 4. 0. -2.
    defend

could be used to define a very primitive truck-like object.
Ray tracing is a computationally intensive process, and rendering complex scenes can take hours of CPU time, even on relatively powerful machines. There are, however, a number of ways of attempting to reduce the running time of the program.
The first and most obvious way is to reduce the number of rays which are traced. This is most simply accomplished by reducing the resolution of the image to be rendered. The -P option may be used to reduce the maximum pixel subdivision level. A maximum level of 0 will speed ray tracing considerably, but will result in obvious aliasing in the image. By default, a pixel will be subdivided a maximum of one time, giving a maximum of nine rays per pixel total.
Alternatively, the -C option or the contrast command may be used to decrease the number of instances in which pixels are subdivided. Using these options, one may indicate the maximum normalized contrast which is allowed before supersampling will occur. If the red, green or blue contrast between neighboring samples (taken at pixel corners) is greater than the maximum allowed, the pixel will be subdivided into four sub-pixels and the sampling process will recurse until the sub-pixel contrast is acceptable or the maximum subdivision level is reached.
The number of rays traced can also be lowered by making all surfaces non-reflecting and non-refracting or by setting maxdepth to a small number. If set to 0, no reflection or refraction rays will be traced. Lastly, using the -n option will cause no shadow rays to be traced.
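For quick test renderings, several of these options can be combined; for example (the file names are illustrative):

    rayshade -P 0 -n scene.ray > preview.rle

Here -P 0 disables pixel subdivision and -n suppresses shadow rays, trading image quality for speed.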
In addition, judicious use of the grid command can reduce rendering times substantially. However, if an object consists of a relatively small number of simple objects, it will likely take less time to simply check for intersection with each element of the object than to trace a ray through a grid.
The C pre-processor can be used to make the creation and managing of input
files much easier. For example, one can create "libraries" of useful colors,
objects, and viewing parameters by using #define and #include. To use such
input files, run the C pre-processor on the file, and pipe the resulting
text to rayshade.
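A minimal sketch of this approach follows; the file names and the RED macro are hypothetical, and the exact pre-processor invocation varies from system to system:

    /* colors.h */
    #define RED .2 0 0 .8 0 0 .5 .5 .5 32. 0.8 0. 1.

    /* scene.ray */
    #include "colors.h"
    surface red RED
    sphere red 8. 0. 0. -2.

The scene could then be rendered with a pipeline such as:

    cpp -P scene.ray | rayshade > picture.rle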
The files in the Examples directory (Examples/*) are example input files.
Rayshade performs no automatic hierarchy construction. The intelligent placement of objects in grids and/or lists is entirely the job of the modeler.
While transparent objects may be wholly contained in other transparent objects, rendering partially intersecting transparent objects with different indices of refraction is, for the most part, nonsensical.
Rayshade is capable of using large amounts of memory. In the environment in which it was developed (machines with at least 8 Megabytes of physical memory plus virtual memory), this has not been a problem, and scenes containing several hundred thousand primitives have been rendered. On smaller machines, however, memory size can be a limiting factor.
The "Total memory allocated" statistic is the total space allocated by calls to malloc. It is not the memory high-water mark. After the input file is processed, memory is only allocated when refraction occurs (to push media onto a stack) and when ray tracing height fields (to dynamically allocate triangles).
The image produced will always be 24 bits deep.
Explicit or implicit specification of vectors of length
less than epsilon (1.E-6) results in undefined behavior.